Results 1 - 19 of 19
1.
IEEE Access ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20243873

ABSTRACT

As intelligent driving vehicles move out of concept and into people's lives, the combination of safe driving and artificial intelligence has become the new direction of future transportation development. Autonomous driving technology is developing based on control algorithms and model recognition. In this paper, a cloud-based interconnected multi-sensor fusion autonomous vehicle system is proposed that uses deep learning (YOLOv4) and improved ORB algorithms to identify pedestrians, vehicles, and various traffic signs. A cloud-based interactive system is built to enable vehicle owners to monitor the status of their vehicles at any time. To meet the multiple applications of autonomous vehicles, the environment perception technology of multi-sensor fusion processing broadens their uses by equipping them with automatic speech recognition (ASR), a vehicle-following mode, and a road patrol mode. These functions enable autonomous driving to be used in applications such as agricultural irrigation, road firefighting, and contactless delivery during new coronavirus outbreaks. Finally, using embedded system equipment, an intelligent car was built for experimental verification, and the overall recognition accuracy of the system was over 96%.

2.
Lecture Notes in Electrical Engineering ; 954:421-430, 2023.
Article in English | Scopus | ID: covidwho-20233444

ABSTRACT

This paper proposes a novel and robust technique for remote cough recognition for COVID-19 detection. This technique is based on sound and image analysis. The objective is to create a real-time system combining artificial intelligence (AI) algorithms, embedded systems, and network of sensors to detect COVID-19-specific cough and identify the person who coughed. Remote acquisition and analysis of sounds and images allow the system to perform both detection and classification of the detected cough using AI algorithms and image processing to identify the coughing person. This will give the ability to distinguish between a normal person and a person carrying the COVID-19 virus. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

3.
Journal of Robotics and Mechatronics ; 35(2):328-337, 2023.
Article in English | ProQuest Central | ID: covidwho-2315351

ABSTRACT

This study presents the positioning method and autonomous flight of a quadrotor drone using ultra-wideband (UWB) communication and an optical flow sensor. UWB communication obtains the distance between multiple ground stations and a mobile station on a robot, and the position is calculated based on a multilateration method similar to the Global Positioning System (GPS). The update rate of positioning using only UWB communication devices is slow; hence, we improved the update rate by combining UWB and an inertial measurement unit (IMU) in a prior study. This study demonstrates the improvement of the positioning method and accuracy by sensor fusion of the UWB device, an IMU, and an optical flow sensor using the extended Kalman filter. The proposed method is validated by hovering and position control experiments and also realizes a sufficient rate and accuracy for autonomous flight.
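The multilateration step this abstract describes can be illustrated with a minimal sketch. The anchor layout, noise-free ranges, and 2-D least-squares formulation below are assumptions for illustration, not details from the paper:

```python
# Minimal 2-D multilateration sketch: estimate a tag position from
# distances to fixed UWB ground stations by linearizing the range
# equations against the first anchor and solving least squares.
# Anchor layout and noise-free ranges are illustrative assumptions.

def multilaterate(anchors, dists):
    (x0, y0), d0 = anchors[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations A^T A p = A^T b for the 2x2 system
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = (3.0, 4.0)
dists = [((x - truth[0])**2 + (y - truth[1])**2) ** 0.5 for x, y in anchors]
est = multilaterate(anchors, dists)
```

With noise-free ranges the recovered position matches the true one; with real UWB noise the same least-squares machinery yields a best-fit estimate, which is where the abstract's EKF fusion with IMU and optical flow takes over.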

4.
ISPRS International Journal of Geo-Information ; 12(2), 2023.
Article in English | Web of Science | ID: covidwho-2307293

ABSTRACT

The objective of this systematic review was to analyze the recently published literature on the Internet of Robotic Things (IoRT) and integrate the insights it articulates on big data management algorithms, deep learning-based object detection technologies, and geospatial simulation and sensor fusion tools. The research problems were whether computer vision techniques, geospatial data mining, simulation-based digital twins, and real-time monitoring technology optimize remote sensing robots. Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) guidelines were leveraged by a Shiny app to obtain the flow diagram comprising evidence-based collected and managed data (the search results and screening procedures). Between January and July 2022, a quantitative literature review of the ProQuest, Scopus, and Web of Science databases was performed, with search terms comprising "Internet of Robotic Things" + "big data management algorithms", "deep learning-based object detection technologies", and "geospatial simulation and sensor fusion tools". As the analyzed research was published between 2017 and 2022, only 379 sources fulfilled the eligibility standards. A total of 105, chiefly empirical, sources were selected after removing full-text papers that were out of scope, did not have sufficient details, or had limited rigor. For screening and quality evaluation, so as to attain sound outcomes and correlations, we deployed AMSTAR (Assessing the Methodological Quality of Systematic Reviews), AXIS (Appraisal tool for Cross-Sectional Studies), MMAT (Mixed Methods Appraisal Tool), and ROBIS (to assess bias risk in systematic reviews). Dimensions was leveraged for initial bibliometric mapping (data visualization) and VOSviewer was harnessed for layout algorithms.

5.
IEEE Transactions on Mobile Computing ; 22(5):2551-2568, 2023.
Article in English | Scopus | ID: covidwho-2306810

ABSTRACT

Multi-modal sensors on mobile devices (e.g., smart watches and smartphones) have been widely used to ubiquitously perceive human mobility and body motions for understanding social interactions between people. This work investigates the correlations between the multi-modal data observed by mobile devices and social closeness among people along their trajectories. To close the gap between cyber-world data distances and physical-world social closeness, this work quantifies the cyber distances between multi-modal data. The human mobility traces and body motions are modeled as cyber signatures based on ambient Wi-Fi access points and accelerometer data observed by mobile devices that explicitly indicate the mobility similarity and movement similarity between people. To verify the merits of modeled cyber distances, we design the localization-free CybeR-physIcal Social dIStancing (CRISIS) system that detects if two persons are physically non-separate (i.e., not social distancing) due to close social interactions (e.g., taking similar mobility traces simultaneously or having a handshake with physical contact). Extensive experiments are conducted in two small-scale environments and a large-scale environment with different densities of Wi-Fi networks and diverse mobility and movement scenarios. The experimental results indicate that our approach is not affected by uncertain environmental conditions and human mobility with an overall detection accuracy of 98.41% in complex mobility scenarios. Furthermore, extensive statistical analysis based on 2-dimensional (2D) and 3-dimensional (3D) mobility datasets indicates that the proposed cyber distances are robust and well-synchronized with physical proximity levels. © 2002-2012 IEEE.
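A hypothetical sketch of comparing two devices' ambient Wi-Fi signatures, in the spirit of the "cyber distance" described above. The cosine-similarity measure, the -100 dBm floor for unseen access points, and the BSSID keys are all assumptions, not the paper's actual distance definition:

```python
# Toy Wi-Fi signature similarity: each signature maps AP BSSID -> RSSI
# (dBm); similarity is the cosine of the two vectors over the union of
# APs. Missing APs get a -100 dBm floor (an assumption), and values are
# shifted so that a stronger signal contributes a larger weight.
import math

def wifi_similarity(sig_a, sig_b):
    aps = sorted(set(sig_a) | set(sig_b))
    va = [sig_a.get(ap, -100) + 100 for ap in aps]
    vb = [sig_b.get(ap, -100) + 100 for ap in aps]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(y * y for y in vb))
    return dot / (na * nb) if na and nb else 0.0

near = wifi_similarity({"ap1": -40, "ap2": -60}, {"ap1": -42, "ap2": -58})
far = wifi_similarity({"ap1": -40, "ap2": -60}, {"ap3": -45, "ap4": -50})
```

Two devices walking the same trajectory hear similar AP sets at similar strengths (high similarity), while devices in different areas share no APs at all, which is the intuition behind mapping cyber-world data distance to physical-world closeness.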

6.
30th ACM International Conference on Multimedia, MM 2022 ; : 7386-7388, 2022.
Article in English | Scopus | ID: covidwho-2302949

ABSTRACT

The fifth ACM International Workshop on Multimedia Content Analysis in Sports (ACM MMSports'22) is part of the ACM International Conference on Multimedia 2022 (ACM Multimedia 2022). After two years of pure virtual MMSports workshops due to COVID-19, MMSports'22 is held on-site again. The goal of this workshop is to bring together researchers and practitioners from academia and industry to address challenges and report progress in mining, analyzing, understanding, and visualizing multimedia/multimodal data in sports, sports broadcasts, sports games and sports medicine. The combination of sports and modern technology offers a novel and intriguing field of research with promising approaches for visual broadcast augmentation and understanding, for statistical analysis and evaluation, and for sensor fusion during workouts as well as competitions. There is a lack of research communities focusing on the fusion of multiple modalities. We are helping to close this research gap with this workshop series on multimedia content analysis in sports. Related Workshop Proceedings are available in the ACM DL at: https://dl.acm.org/doi/proceedings/10.1145/3552437. © 2022 Owner/Author.

7.
Sci Afr ; 20: e01676, 2023 Jul.
Article in English | MEDLINE | ID: covidwho-2294621

ABSTRACT

Rehabilitation services are among the most severely impacted by the COVID-19 pandemic. This has increased the number of people not receiving the needed rehabilitation care. Home-based rehabilitation becomes alternative support to face this greater need. However, monitoring kinematics parameters during rehabilitation exercises is critical for an effective recovery. This work proposes a detailed framework to estimate knee kinematics using a wearable Magnetic and Inertial Measurement Unit (MIMU). That allows at-home monitoring for knee rehabilitation progress. Two MIMU sensors were attached to the shank and thigh segments respectively. First, the absolute orientation of each sensor was estimated using a sensor fusion algorithm. Second, these sensor orientations were transformed to segments orientations using a functional sensor-to-segment (STS) alignment. Third, the relative orientation between segments, i.e., knee joint angle, was computed and the relevant kinematics parameters were extracted. Then, the validity of our approach was evaluated with a gold-standard optoelectronic system. Seven participants completed three to five Timed-Up-and-Go (TUG) tests. The estimated knee angle was compared to the reference angle. Root-mean-square error (RMSE), correlation coefficient, and Bland-Altman analysis were considered as evaluation metrics. Our results showed reasonable accuracy (RMSE < 8°), strong to very-strong correlation (correlation coefficient > 0.86), a mean difference within 1.1°, and agreement limits from -16° to 14°. In addition, no significant difference was found (p-value > 0.05) in extracted kinematics parameters between both systems. The proposed approach might represent a suitable alternative for the assessment of knee rehabilitation progress in a home context.
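The "relative orientation between segments" step above can be sketched with unit quaternions. The quaternion convention (w, x, y, z), the identity thigh pose, and the example 30° flexion are illustrative assumptions, not values from the paper:

```python
# Knee joint angle from two segment orientations: given thigh and shank
# orientations as unit quaternions (w, x, y, z), the joint rotation is
# q_knee = conj(q_thigh) * q_shank, and the flexion magnitude is its
# rotation angle 2*acos(|w|). Example values are illustrative.
import math

def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def joint_angle_deg(q_thigh, q_shank):
    w = qmul(qconj(q_thigh), q_shank)[0]
    w = max(-1.0, min(1.0, w))            # clamp against float round-off
    return math.degrees(2 * math.acos(abs(w)))

# Thigh level, shank rotated 30 degrees about the knee's flexion axis (x)
half = math.radians(15)
q_thigh = (1.0, 0.0, 0.0, 0.0)
q_shank = (math.cos(half), math.sin(half), 0.0, 0.0)
angle = joint_angle_deg(q_thigh, q_shank)
```

In the paper's pipeline, the two input quaternions come from the MIMU sensor-fusion step after the sensor-to-segment alignment; this sketch covers only the final relative-orientation computation.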

8.
Human-Centric Computing and Information Sciences ; 13, 2023.
Article in English | Web of Science | ID: covidwho-2232517

ABSTRACT

In epidemic prevention and control measures, unmanned devices based on autonomous driving technology have stepped into the front lines of epidemic prevention, playing a vital role in measures such as protective-measures detection. Autonomous positioning is one of the key technologies of autonomous driving. High-precision positioning can provide accurate location-based epidemic prevention services and a refined intelligent management system for the government and citizens. In this paper, we propose an unmanned vehicle (UV) positioning system, REW_SLAM, based on lidar and a stereo camera, which realizes real-time online pose estimation of the UV by using high-precision lidar poses to correct visual positioning data. A six-element extended Kalman filter (6-element EKF) is proposed to fuse lidar and stereo camera sensor information; it retains the second-order Taylor series of the observation and state equations, which effectively improves the accuracy of data fusion. Meanwhile, to improve the quality of the lidar outputs, a modified wavelet denoising method is introduced to preprocess the raw lidar data. Our approach was tested on the KITTI datasets and a real UV platform, respectively. Compared with two other algorithms, the relative pose error and absolute trajectory error of this algorithm improve by 0.26 m and 2.36 m on average, respectively, at the cost of a 6.685% average increase in CPU occupancy, demonstrating the robustness and effectiveness of the algorithm.
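This is not the paper's 6-element EKF, but a minimal scalar sketch of the underlying idea of correcting a visual pose estimate with a more precise lidar measurement by inverse-variance (Kalman-gain) weighting; all numbers are invented:

```python
# Scalar Kalman-style fusion step: combine a stereo-camera pose estimate
# with a lidar correction, weighting each by its (assumed) variance.
# The fused variance is always smaller than either input's.

def fuse(mu_a, var_a, mu_b, var_b):
    k = var_a / (var_a + var_b)           # gain pulling toward measurement b
    mu = mu_a + k * (mu_b - mu_a)
    var = (1 - k) * var_a
    return mu, var

visual_pose, visual_var = 2.40, 0.09      # stereo estimate (m, m^2), assumed
lidar_pose, lidar_var = 2.52, 0.01        # lidar correction (m, m^2), assumed
fused_pose, fused_var = fuse(visual_pose, visual_var, lidar_pose, lidar_var)
```

Because the lidar variance is smaller, the fused pose lands close to the lidar value while still using the camera estimate; the full EKF does the same weighting jointly over a six-element state.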

9.
IEEE Sensors Journal ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-2136429

ABSTRACT

Due to the COVID-19 global pandemic, there is a greater need for remote patient care, especially in rehabilitation, which requires direct contact. However, traditional Chinese rehabilitation technologies, such as gua sha, often need to be implemented by well-trained professionals. To automate and professionalize gua sha, it is necessary to record the nursing and rehabilitation process and reproduce it in developing smart gua sha equipment. This paper proposes a new signal processing and sensor fusion method for developing a piece of smart gua sha equipment. A novel stabilized numerical integration method based on information fusion and detrended fluctuation analysis (SNIF-DFA) is performed to obtain velocity and displacement information during gua sha operation. The experimental results show that the proposed method outperforms the traditional numerical integration method with respect to information accuracy and realizes accurate position calculations. This is of great significance in developing robots or automated machines that reproduce the nursing and rehabilitation operations of medical professionals.
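The paper's SNIF-DFA algorithm is not reproduced here. As a toy stand-in, the following sketch shows why plain numerical integration of accelerometer data drifts, and how a linear detrend (assuming the stroke starts and ends at rest) removes a constant-bias drift exactly:

```python
# Trapezoidal integration of acceleration accumulates drift from any
# constant sensor bias; a linear detrend removes it under the assumed
# boundary condition that velocity is zero at both ends of the stroke.

def integrate(samples, dt):
    out, acc = [0.0], 0.0
    for a0, a1 in zip(samples, samples[1:]):
        acc += 0.5 * (a0 + a1) * dt
        out.append(acc)
    return out

def detrend(series):
    n = len(series) - 1
    drift = series[-1] - series[0]
    return [v - series[0] - drift * i / n for i, v in enumerate(series)]

dt = 0.01
true_acc = [1.0] * 50 + [-1.0] * 50       # accelerate, then decelerate
biased = [a + 0.2 for a in true_acc]      # constant sensor bias (assumed)
v_true = integrate(true_acc, dt)
v_fixed = detrend(integrate(biased, dt))
```

A constant bias produces drift that is exactly linear in time, so the detrend recovers the true velocity here; the paper's stabilized method handles the harder case of non-constant drift.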

10.
Dissertation Abstracts International: Section B: The Sciences and Engineering ; 83(10-B):No Pagination Specified, 2022.
Article in English | APA PsycInfo | ID: covidwho-2012045

ABSTRACT

Public transit stations and hubs are difficult to navigate for people with visual impairments. Moreover, public transit has been affected disproportionately by the social distancing requirements consequent to the COVID-19 pandemic. The objective of this dissertation is to provide a technology for addressing these concerns in the frame of a mobile app named RouteMe2. The technology provides micro-routing and guidance to visually impaired travelers through complex routes in transit hubs. This work also includes a study to monitor the distance between travelers inside the bus for social distancing applications. Reducing the risk of airborne viral infections by social distancing can contribute to improving the overall safety of public transit.

The key enablers of this technology are sufficiently accurate self-localization and micro-routing as well as effective communication of the contextual spatiotemporal information with the visually impaired users. The accuracy of self-localization in outdoor environments is challenged by poor Global Positioning System (GPS) reception due to tall nearby buildings that may obscure the view of one or more satellites, a.k.a. shading. Shading is very common in urban environments and is a major cause of GPS failure. In order to mitigate the effect of shading, I statistically fuse the signals received from GPS as well as a small number of Bluetooth Low Energy (BLE) beacons. I further pair the statistical fusion with a discrete Bayes filter tracker to increase the self-localization accuracy. Experiments were conducted at the San Jose Diridon light rail station to quantitatively assess the performance of the resulting system.

I have designed and implemented certain features and functionalities of RouteMe2 to provide effective communication of the in-context spatiotemporal information with visually impaired users while they use the app. I leveraged our previously published focus group study conducted with visually impaired people, as well as a review of the user interfaces of existing related apps, to improve the user experience of RouteMe2, the details of which are presented.

I further assess the ability of two RSSI-based methods to detect interpersonal distances shorter than 1 or 2 meters. One method uses the power received from the smartphone carried by another person. The other method measures the disparity in the power received by the two smartphones from one or more fixed BLE beacons. The results show that use of the RSSI disparity enables discrimination measures that are as good as or better than using the RSSI received from another smartphone. I demonstrate the potential of a system that uses BLE beacons, placed inside a vehicle, to localize a passenger within the length of the vehicle with an accuracy better than 1 meter. (PsycInfo Database Record (c) 2022 APA, all rights reserved)
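The RSSI-disparity idea from this dissertation can be sketched with a log-distance path-loss model. The transmit power at 1 m (-59 dBm) and path-loss exponent (n = 2.0) are common textbook assumptions, not the dissertation's calibrated values:

```python
# If two phones hear the same fixed BLE beacon, the difference (disparity)
# in their received powers grows with the difference in their distances
# to the beacon, making it usable as a proximity discriminator.
import math

def rssi_at(distance_m, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model (parameters are assumptions)."""
    return tx_power - 10 * n * math.log10(distance_m)

def disparity(d_phone_a, d_phone_b):
    return abs(rssi_at(d_phone_a) - rssi_at(d_phone_b))

close_pair = disparity(2.0, 2.2)   # phones at similar range to the beacon
far_pair = disparity(2.0, 6.0)     # phones several meters apart in range
```

A small disparity is consistent with (though not proof of) two phones being near each other, which is why the dissertation pairs this cue with thresholds tuned on real data.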

11.
Sensors (Basel) ; 22(14)2022 Jul 19.
Article in English | MEDLINE | ID: covidwho-1938962

ABSTRACT

We present a multi-sensor data fusion model based on a reconfigurable module (RM) with three fusion layers. In the data layer, raw data are refined with respect to the sensor characteristics and then converted into logical values. In the feature layer, a fusion tree is configured, and the values of the intermediate nodes are calculated by applying predefined logical operations, which are adjustable. In the decision layer, a final decision is made by computing the value of the root according to predetermined equations. In this way, with given threshold values or sensor characteristics for data refinement and logic expressions for feature extraction and decision making, we reconstruct an RM that performs multi-sensor fusion and is adaptable for a dedicated application. We attempted to verify its feasibility by applying the proposed RM to an actual application. Considering the spread of the COVID-19 pandemic, an unmanned storage box was selected as our application target. Four types of sensors were used to determine the state of the door and the status of the existence of an item inside it. We implemented a prototype system that monitored the unmanned storage boxes by configuring the RM according to the proposed method. It was confirmed that a system built with only low-cost sensors can identify the states more reliably through multi-sensor data fusion.
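A toy version of the three-layer reconfigurable module might look like the following; the sensor names, thresholds, and logic expressions are invented for illustration, loosely modeled on the unmanned-storage-box application:

```python
# Three fusion layers in miniature: the data layer thresholds raw
# readings into logical values, the feature layer combines them with
# configurable logic, and the decision layer computes the root value.

def data_layer(raw, thresholds):
    return {name: raw[name] >= thresholds[name] for name in thresholds}

def occupied(raw):
    bits = data_layer(raw, {"weight_g": 50, "ir_beam_blocked": 1,
                            "door_closed": 1, "lid_latched": 1})
    item_present = bits["weight_g"] or bits["ir_beam_blocked"]   # feature node
    box_sealed = bits["door_closed"] and bits["lid_latched"]     # feature node
    return item_present and box_sealed                           # decision root

full_box = occupied({"weight_g": 120, "ir_beam_blocked": 0,
                     "door_closed": 1, "lid_latched": 1})
empty_box = occupied({"weight_g": 0, "ir_beam_blocked": 0,
                      "door_closed": 1, "lid_latched": 1})
```

Reconfiguring the module then amounts to swapping the threshold table and the logic expressions, without touching the surrounding system, which is the adaptability the abstract claims for the RM.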


Subjects
COVID-19, Pandemics, Humans
12.
Sensors (Basel) ; 22(13)2022 Jun 25.
Article in English | MEDLINE | ID: covidwho-1911520

ABSTRACT

At present, the COVID-19 pandemic still presents occasional outbreaks, and pedestrians in public areas are at risk of being infected by the virus. In order to reduce the risk of cross-infection, an advanced pedestrian state sensing method for automated patrol vehicles based on multi-sensor fusion is proposed. Firstly, the pedestrian data output by the Euclidean clustering algorithm and the YOLO V4 network are obtained, and a decision-level fusion method is adopted to improve the accuracy of pedestrian detection. Then, combined with the pedestrian detection results, we calculate the crowd density distribution based on multi-layer fusion and estimate the crowd density in the scenario according to the density distribution. In addition, once a crowd aggregates, the body temperature of the aggregated crowd is detected by a thermal infrared camera. Finally, based on the proposed method, an experiment with an automated patrol vehicle is designed to verify accuracy and feasibility. The experimental results show that the mean accuracy of pedestrian detection is increased by 17.1% compared with using a single sensor. The area of crowd aggregation is identified, and the mean error of the crowd density estimation is 3.74%. The maximum error between the body temperature detection results and thermometer measurements is less than 0.8°, and abnormal-temperature targets can be identified in the scenario, providing an efficient advanced pedestrian state sensing technique for epidemic prevention and control areas.
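The decision-level fusion step might be sketched as follows. Projecting lidar cluster centroids into the image plane and confirming camera boxes that contain one is a simplification for illustration, not the paper's exact matching rule:

```python
# Simplified decision-level fusion: camera (YOLO) boxes and lidar cluster
# centroids (already projected to the image plane, mock geometry) are
# matched; a box containing a lidar point is "confirmed" by both sensors,
# and unmatched lidar points are kept as lidar-only detections.

def inside(point, box):
    (x, y), (x1, y1, x2, y2) = point, box
    return x1 <= x <= x2 and y1 <= y <= y2

def fuse_detections(camera_boxes, lidar_points):
    fused, used = [], set()
    for box in camera_boxes:
        hits = [i for i, p in enumerate(lidar_points) if inside(p, box)]
        used.update(hits)
        fused.append({"box": box, "confirmed": bool(hits)})
    for i, p in enumerate(lidar_points):
        if i not in used:
            fused.append({"point": p, "confirmed": False})
    return fused

dets = fuse_detections([(0, 0, 10, 20), (30, 0, 40, 20)],
                       [(5.0, 10.0), (60.0, 10.0)])
```

Keeping single-sensor detections at lower confidence rather than discarding them is one plausible way fusion can raise overall detection accuracy over either sensor alone, as the abstract reports.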


Subjects
Biosensing Techniques, COVID-19, Pedestrians, COVID-19/epidemiology, COVID-19/prevention & control, Crowding, Humans, Pandemics/prevention & control
13.
IEEE Sensors Journal ; 22(10):9568-9579, 2022.
Article in English | Web of Science | ID: covidwho-1868548

ABSTRACT

Airborne transmittable diseases such as COVID-19 spread from an infected to a healthy person when they are in proximity to each other. Epidemiologists suggest that the risk of COVID-19 transmission increases when an infected person is within 6 feet of a healthy person and contact between them lasts longer than 15 minutes (also called Too Close For Too Long, TC4TL). In this paper, we systematically investigate Machine Learning (ML) methods to detect proximity by analyzing a publicly available dataset gathered from smartphones' built-in Bluetooth, accelerometer, and gyroscope sensors. We extract 20 statistical features from the accelerometer and gyroscope signals and 28 statistical features from the Bluetooth signal, which are classified to determine whether subjects are closer than 6 feet as well as the subjects' context. Using machine learning regression, we also estimate the range between the subjects. Among the 19 ML classification and regression methods that we explored, we found that ensemble (boosted and bagged trees) methods perform best with accelerometer and gyroscope data, while the regression trees ML algorithm performs best with the Bluetooth signal. We further explore sensor fusion methods and demonstrate that the combination of all three sensors achieves a higher accuracy of range estimation than using each individual sensor. We show that proximity (< 6 ft or not) can be classified with 72%-90% accuracy using the accelerometer, 78%-84% accuracy using the gyroscope sensor, and 76%-92% accuracy with the Bluetooth data. Our model outperforms the current state-of-the-art methods using neural networks and achieved a Normalized Decision Cost Function (nDCF) score of 0.34 with Bluetooth radio and 0.36 with sensor fusion.
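The statistical-feature extraction step can be sketched as follows. This five-feature summary over a signal window is illustrative; the paper's exact 20-feature (motion) and 28-feature (Bluetooth) sets are not reproduced:

```python
# Summarize a window of accelerometer magnitudes with simple statistics
# before handing the feature vector to a classifier or regressor.
import math
import statistics

def window_features(samples):
    return {
        "mean": statistics.fmean(samples),
        "std": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
        "rms": math.sqrt(sum(x * x for x in samples) / len(samples)),
    }

feats = window_features([9.7, 9.9, 10.1, 10.3])
```

Sensor fusion at this level is then just concatenating the per-sensor feature dictionaries into one vector, which is consistent with the abstract's finding that combining all three sensors improves range estimation.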

14.
73rd IEEE National Aerospace and Electronics Conference (NAECON) ; : 415-422, 2021.
Article in English | Web of Science | ID: covidwho-1849305

ABSTRACT

The growing surge of misinformation about COVID-19 can pose a great hindrance to truth: it can magnify distrust in policy makers, degrade authorities' credibility, and even harm public health. Classification of textual content in social media data relating to COVID-19 is an effective tool to combat misinformation on social media platforms. We leveraged Twitter data in developing classification methods to detect misinformation and to identify tweet sentiment. Six fusion-based classification models were built by fusing three classical machine learning algorithms: multinomial naïve Bayes, logistic regression, and support vector classification. The best-performing models were selected to detect misinformation and to classify sentiment on tweets created during the early outbreak of the COVID-19 pandemic and in the fifth month of the pandemic. We found that the majority of the public held positive sentiment toward all six types of misinformation news on the Twitter platform. Except for political or biased news, the general public expressed more positive sentiment toward unreliable, conspiracy, clickbait, unreliable with political/biased, and clickbait with political/biased news later in the summer than earlier during the outbreak. The results provide decision- and policy-makers valuable knowledge of public opinion toward various types of misinformation spreading over social media.

15.
International Journal of Intelligent Systems and Applications in Engineering ; 10(1):122-128, 2022.
Article in English | Scopus | ID: covidwho-1811671

ABSTRACT

As per the World Health Organization (WHO), avoiding touching the face in public or crowded places is an effective way to prevent respiratory viral infections. This recommendation has become more crucial with the current health crisis and the worldwide spread of the COVID-19 pandemic. However, most face touches are done unconsciously, which is why it is difficult for people to monitor their hand movements and avoid touching the face all the time. Hand-worn wearable devices like smartwatches are equipped with multiple sensors that can be utilized to track hand movements automatically. This work proposes a smartwatch application that uses small, efficient, end-to-end Convolutional Neural Network (CNN) models to classify hand motion and identify face-touch moves. To train the models, a large dataset was collected for both left and right hands with over 28k training samples representing multiple hand motion types, body positions, and hand orientations. The app provides real-time feedback and alerts the user with vibration and sound whenever they attempt to touch the face. Achieved results show state-of-the-art face-touch accuracy with average recall, precision, and F1-score of 96.75%, 95.1%, and 95.85% respectively, with a low False Positive Rate (FPR) of 0.04%. By using efficient configurations and small models, the app achieves high efficiency and can run for long hours without significant impact on the battery, which makes it applicable to most off-the-shelf smartwatches. © 2022, Ismail Saritas. All rights reserved.
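The paper trains a small CNN; as a dependency-free stand-in, this sketch shows the same window-in/label-out interface using a nearest-centroid rule over invented prototype motion windows (the profiles and labels are hypothetical, not from the paper's dataset):

```python
# Classify a short window of wrist-motion magnitudes as "face_touch" or
# "other" by distance to invented prototype windows; a real app would
# replace this with the trained CNN and trigger vibration/sound alerts.
import math

PROTOTYPES = {
    "face_touch": [0.1, 0.8, 1.6, 0.9, 0.2],   # invented accel profile
    "other": [0.1, 0.1, 0.2, 0.1, 0.1],
}

def classify(window):
    return min(PROTOTYPES, key=lambda k: math.dist(window, PROTOTYPES[k]))

label = classify([0.2, 0.7, 1.5, 1.0, 0.3])
```

Whatever the classifier, the surrounding loop is the same: slide a fixed-length window over the accelerometer stream, classify it, and alert on a "face_touch" label.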

16.
Healthcare (Basel) ; 10(3)2022 Feb 28.
Article in English | MEDLINE | ID: covidwho-1715267

ABSTRACT

This paper employs a unique sensor fusion (SF) approach to detect a COVID-19 suspect, and an enhanced MobileNetV2 model is used for face mask detection on an Internet-of-Things (IoT) platform. The SF algorithm avoids incorrect predictions of the suspect. Health data are continuously monitored and recorded on the ThingSpeak cloud server. When a COVID-19 suspect is detected, an emergency email is sent to healthcare personnel with the GPS position of the suspect. A lightweight and fast deep learning model is used to recognize appropriate mask positioning; this restricts virus transmission. When tested with the real-world masked face dataset (RMFD), the enhanced MobileNetV2 neural network is optimal for the Raspberry Pi. Our IoT device and deep learning model are 98.50% (compared to commercial devices) and 99.26% accurate, respectively, and the time required for face mask evaluation is 31.1 milliseconds. The proposed device is useful for remote monitoring of COVID-19 patients. Thus, the method will find medical application in the detection of COVID-19-positive patients. The device is also wearable.

17.
Clin Chem ; 68(1): 125-133, 2021 12 30.
Article in English | MEDLINE | ID: covidwho-1598770

ABSTRACT

BACKGROUND: Artificial intelligence (AI) and machine learning (ML) are poised to transform infectious disease testing. Uniquely, infectious disease testing is one of the most technologically diverse spaces in laboratory medicine, where multiple platforms and approaches may be required to support clinical decision-making. Despite advances in laboratory informatics, the vast array of infectious disease data is constrained by human analytical limitations. Machine learning can exploit multiple data streams, including but not limited to laboratory information, and overcome human limitations to provide physicians with predictive and actionable results. As this is a quickly evolving area of computer science, laboratory professionals should become aware of AI/ML applications for infectious disease testing as more platforms become commercially available. CONTENT: In this review we: (a) define both AI and ML, (b) provide an overview of common ML approaches used in laboratory medicine, (c) describe the current AI/ML landscape as it relates to infectious disease testing, and (d) discuss the future evolution of AI/ML for infectious disease testing in both laboratory and point-of-care applications. SUMMARY: The review provides an important educational overview of AI/ML techniques in the context of infectious disease testing. This includes supervised ML approaches, which are frequently used in laboratory medicine applications involving infectious diseases, such as COVID-19, sepsis, hepatitis, malaria, meningitis, Lyme disease, and tuberculosis. We also apply the concept of "data fusion", describing the future of laboratory testing where multiple data streams are integrated by AI/ML to provide actionable clinical knowledge.


Subjects
Artificial Intelligence, Communicable Diseases, Machine Learning, Communicable Diseases/diagnosis, Humans
18.
Front Public Health ; 9: 797808, 2021.
Article in English | MEDLINE | ID: covidwho-1581100

ABSTRACT

The presented deep learning and sensor-fusion based assistive technology (smart facemask and thermal scanning kiosk) protects the individual using automatic face-mask detection and automatic thermal scanning to detect the current body temperature. Furthermore, the presented system also raises a variety of notifications, such as an alarm, if an individual is not wearing a mask or has a body temperature beyond the standard threshold of 98.6°F (37°C). Design/methodology/approach: The presented deep learning and sensor-fusion based approach can detect whether an individual is wearing a mask and provide appropriate notification to security personnel by raising an alarm. Moreover, the smart tunnel is also equipped with a thermal sensing unit embedded with a camera, which can detect the real-time body temperature of an individual with respect to the limits prescribed in WHO reports. Findings: The investigation results validate the performance of the presented smart face-mask and thermal scanning mechanism. The presented system can detect an outsider entering the building with or without a mask and alert the security control room by raising appropriate alarms. Furthermore, the presented smart epidemic tunnel is embedded with an intelligent algorithm that can perform real-time thermal scanning of an individual and store essential information on a cloud platform, such as Google Firebase. Thus, the proposed system favors society by saving time and helps in lowering the spread of coronavirus.


Subjects
COVID-19, Deep Learning, Algorithms, Disease Outbreaks/prevention & control, Humans, Masks
19.
Sensors (Basel) ; 21(18)2021 Sep 08.
Article in English | MEDLINE | ID: covidwho-1468446

ABSTRACT

Edge intelligence (EI) has received a lot of interest because it can reduce latency, increase efficiency, and preserve privacy. More significantly, as the Internet of Things (IoT) has proliferated, billions of portable and embedded devices have been interconnected, producing zillions of gigabytes on edge networks. Thus, there is an immediate need to push AI (artificial intelligence) breakthroughs within edge networks to achieve the full promise of edge data analytics. EI solutions have supported digital technology workloads and applications from the infrastructure level to edge networks; however, there are still many challenges with the heterogeneity of computational capabilities and the spread of information sources. We propose a novel event-driven deep-learning framework, called EDL-EI (event-driven deep learning for edge intelligence), via the design of a novel event model by defining events using correlation analysis with multiple sensors in real-world settings and incorporating multi-sensor fusion techniques, a transformation method for sensor streams into images, and lightweight 2-dimensional convolutional neural network (CNN) models. To demonstrate the feasibility of the EDL-EI framework, we presented an IoT-based prototype system that we developed with multiple sensors and edge devices. To verify the proposed framework, we present a case study of air-quality scenarios based on benchmark data provided by the U.S. Environmental Protection Agency for the most polluted cities in South Korea and China. We have obtained outstanding predictive accuracy (97.65% and 97.19%) from two deep-learning models on the cities' air-quality patterns. Furthermore, the air-quality changes from 2019 to 2020 have been analyzed to check the effects of the COVID-19 pandemic lockdown.
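The "transformation method for sensor streams into images" can be sketched as min-max normalization followed by reshaping into a 2-D grid; the grid size and input values are assumptions for illustration:

```python
# Turn a 1-D stream of sensor readings into a small 2-D "image" a 2-D
# CNN could consume: min-max normalize to [0, 1], then reshape row-wise.

def stream_to_image(stream, rows, cols):
    lo, hi = min(stream), max(stream)
    span = (hi - lo) or 1.0                   # guard against a flat stream
    norm = [(v - lo) / span for v in stream]
    return [norm[r * cols:(r + 1) * cols] for r in range(rows)]

img = stream_to_image([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3],
                      4, 4)
```

Stacking one such grid per sensor as channels is one plausible way to feed multi-sensor windows to a lightweight 2-D CNN, in line with the framework described above.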


Subjects
COVID-19, Deep Learning, Artificial Intelligence, Communicable Disease Control, Humans, Intelligence, Pandemics, SARS-CoV-2, United States